- North America > United States > California > Alameda County > Berkeley (0.04)
- North America > Canada (0.04)
- Europe > Romania > București - Ilfov Development Region > Municipality of Bucharest > Bucharest (0.04)
- Asia > Middle East > Jordan (0.04)
SemanticSugarBeets: A Multi-Task Framework and Dataset for Inspecting Harvest and Storage Characteristics of Sugar Beets
Croonen, Gerardus, Trondl, Andreas, Simon, Julia, Steininger, Daniel
While sugar beets are stored prior to processing, they lose sugar due to factors such as microorganisms present in adherent soil and excess vegetation. Automated visual inspection of the beets promises to aid quality assurance and thereby increase efficiency throughout the processing chain of sugar production. In this work, we present a novel high-quality annotated dataset and a two-stage method for the detection, semantic segmentation and mass estimation of post-harvest and post-storage sugar beets in monocular RGB images. We conduct extensive ablation experiments for the detection of sugar beets and their fine-grained semantic segmentation regarding damages, rot, soil adhesion and excess vegetation. For these tasks, we evaluate multiple image sizes, model architectures and encoders, as well as the influence of environmental conditions. Our experiments show an mAP50-95 of 98.8 for sugar-beet detection and an mIoU of 64.0 for the best-performing segmentation model.
- Food & Agriculture > Agriculture (0.69)
- Health & Medicine (0.46)
- Information Technology > Sensing and Signal Processing > Image Processing (1.00)
- Information Technology > Artificial Intelligence > Machine Learning > Neural Networks > Deep Learning (0.95)
- Information Technology > Artificial Intelligence > Representation & Reasoning (0.88)
- Information Technology > Artificial Intelligence > Vision (0.70)
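The mIoU figure reported in the abstract above is the mean of per-class intersection-over-union scores. A minimal sketch of how it can be computed from predicted and ground-truth label maps (the toy 2x2 maps below are illustrative, not the dataset's annotations):

```python
import numpy as np

def mean_iou(pred: np.ndarray, target: np.ndarray, num_classes: int) -> float:
    """Mean intersection-over-union across classes present in either map."""
    ious = []
    for c in range(num_classes):
        pred_c = pred == c
        target_c = target == c
        union = np.logical_or(pred_c, target_c).sum()
        if union == 0:          # class absent from both maps: skip it
            continue
        intersection = np.logical_and(pred_c, target_c).sum()
        ious.append(intersection / union)
    return float(np.mean(ious))

# toy 2x2 label maps with classes {0, 1}
pred = np.array([[0, 1], [1, 1]])
target = np.array([[0, 1], [0, 1]])
print(mean_iou(pred, target, num_classes=2))   # 7/12 ≈ 0.583
```

Here class 0 scores IoU 1/2 and class 1 scores 2/3, so the mean is 7/12; real evaluations accumulate the per-class intersections and unions over the whole test set before taking the ratio.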
An MEG Study of Response Latency and Variability in the Human Visual System During a Visual-Motor Integration Task
Human reaction times during sensory-motor tasks vary considerably. To begin to understand how this variability arises, we examined neuronal-population response-time variability at early versus late visual processing stages. The conventional view is that precise temporal information is gradually lost as information is passed through a layered network of mean-rate "units." We tested in humans whether neuronal populations at different processing stages behave like mean-rate "units". A blind source separation algorithm was applied to MEG signals from sensory-motor integration tasks.
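Blind source separation of multichannel recordings like the MEG signals above is commonly done with independent component analysis. A minimal sketch using scikit-learn's FastICA on synthetic mixed signals (the sources and mixing matrix are illustrative stand-ins, not the study's algorithm or data):

```python
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(0)
t = np.linspace(0, 1, 2000)
# two synthetic "source" signals: a slow sine and a faster sawtooth
sources = np.c_[np.sin(2 * np.pi * 5 * t),
                2 * (t * 13 % 1) - 1]
# mix them linearly into three simulated "sensor" channels
mixing = np.array([[1.0, 0.5], [0.4, 1.2], [0.8, 0.3]])
sensors = sources @ mixing.T                 # shape (2000, 3)

ica = FastICA(n_components=2, random_state=0)
recovered = ica.fit_transform(sensors)       # estimated sources, shape (2000, 2)
print(recovered.shape)
```

ICA recovers the sources only up to sign, scale, and permutation, which is why MEG analyses typically inspect each recovered component's topography and time course to decide what it represents.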
Soft Sensing Transformer: Hundreds of Sensors are Worth a Single Word
Zhang, Chao, Yella, Jaswanth, Huang, Yu, Qian, Xiaoye, Petrov, Sergei, Rzhetsky, Andrey, Bom, Sthitie
With the rapid development of AI technology in recent years, there have been many studies applying deep learning models to soft sensing. However, while the models have grown more complex, the data sets remain limited: researchers are fitting million-parameter models with hundreds of data samples, which is insufficient to demonstrate the effectiveness of their models, which consequently often fail when implemented in industrial applications. To address this long-standing problem, we are releasing large-scale, high-dimensional time-series manufacturing sensor data from Seagate Technology to the public. We demonstrate the challenges and effectiveness of modeling industrial big data with a Soft Sensing Transformer model on these data sets. The Transformer is used because it has outperformed state-of-the-art techniques in Natural Language Processing, and has since also performed well when applied directly to computer vision without image-specific inductive biases. We observe a similarity between sentence structure and sensor readings, and process the multi-variable sensor readings in a time series in the same manner as sentences in natural language: the high-dimensional time-series data is formatted into the same shape as embedded sentences and fed into the Transformer model. The results show that the Transformer outperforms the benchmark models in the soft-sensing field, which are based on auto-encoders and long short-term memory (LSTM) models. To the best of our knowledge, we are the first team in academia or industry to benchmark the performance of the original Transformer model on large-scale numerical soft-sensing data.
- North America > United States > Illinois > Cook County > Chicago (0.04)
- Europe > Ireland (0.04)
- Asia > China (0.04)
- (6 more...)
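The sentence analogy in the abstract above (each time step's sensor vector treated like a word embedding) can be sketched with a standard Transformer encoder. All layer sizes, the linear embedding, and the pooled regression head below are illustrative assumptions, not the authors' actual architecture:

```python
import torch
import torch.nn as nn

class SoftSensingSketch(nn.Module):
    """Treat each time step's sensor vector as a token embedding."""
    def __init__(self, n_sensors: int = 128, d_model: int = 64):
        super().__init__()
        # project a raw sensor reading into "word embedding" space
        self.embed = nn.Linear(n_sensors, d_model)
        layer = nn.TransformerEncoderLayer(d_model=d_model, nhead=4,
                                           batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)
        self.head = nn.Linear(d_model, 1)    # e.g. a single soft-sensing target

    def forward(self, x):                    # x: (batch, time, n_sensors)
        h = self.encoder(self.embed(x))      # (batch, time, d_model)
        return self.head(h.mean(dim=1))      # pool over time -> (batch, 1)

x = torch.randn(8, 20, 128)                  # 8 sequences, 20 steps, 128 sensors
print(SoftSensingSketch()(x).shape)          # torch.Size([8, 1])
```

The only soft-sensing-specific choice here is the input projection: instead of a token-lookup embedding table, a linear layer maps each high-dimensional numeric reading into the model dimension, after which the sequence is processed exactly like an embedded sentence.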
Artificial intelligence: Towards a better understanding of the underlying mechanisms
The automatic identification of complex features in images has already become a reality thanks to artificial neural networks. Examples of software exploiting this technique include Facebook's automatic tagging system, Google's image search engine, and the animal and plant recognition system used by iNaturalist. We know that these networks are inspired by the human brain, but their working mechanism remains mysterious. New research, conducted by SISSA in association with the Technical University of Munich and presented at the 33rd Annual NeurIPS Conference, proposes a new approach for studying deep neural networks and sheds new light on the image-processing operations these networks are able to carry out. Similar to what happens in the visual system, neural networks used for automatic image recognition analyse content progressively, through a chain of processing stages.
Finding out how neural nets do what they do
Now scientists from the Italian research institute SISSA and the Technical University of Munich have found a light to shine inside – an approach for studying deep neural networks that reveals the processes they are able to carry out – so long as they are image-processing networks. "We have developed a method to systematically measure the level of complexity of the information encoded in the various layers of a deep network – the so-called intrinsic dimension of image representations," according to SISSA scientists Davide Zoccolan and Alessandro Laio. "Thanks to the collaboration of experts in physics, neuroscience and machine learning, we have exploited a tool originally developed in another area to study the functioning of deep neural networks." Working with Jakob Macke of TU Munich, they applied the method and found that, inside an image-recognition deep neural network, representations of the image undergo a progressive transformation. Similar to what happens in the visual system, these networks analyse content progressively, through a chain of processing stages.
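The "intrinsic dimension" tool Zoccolan and Laio mention has a simple two-nearest-neighbour form. A minimal sketch of such a TwoNN-style estimator on synthetic data (the random linear embedding is illustrative, and this is a simplified estimator, not necessarily the exact procedure used in the study):

```python
import numpy as np
from scipy.spatial import cKDTree

def two_nn_id(points: np.ndarray) -> float:
    """Two-nearest-neighbour intrinsic-dimension estimate (simplified MLE)."""
    tree = cKDTree(points)
    # distances to self (col 0), 1st (col 1) and 2nd (col 2) nearest neighbours
    dists, _ = tree.query(points, k=3)
    mu = dists[:, 2] / dists[:, 1]          # ratio of 2nd to 1st NN distance
    return len(points) / np.sum(np.log(mu))

rng = np.random.default_rng(0)
# 3-D latent data linearly embedded in 10 dimensions: true ID is 3
latent = rng.normal(size=(5000, 3))
embedded = latent @ rng.normal(size=(3, 10))
print(two_nn_id(embedded))                  # close to the true dimension, 3
```

The appeal of this family of estimators for deep networks is that they need only pairwise distances between layer activations, so the "complexity" of each layer's representation can be measured without any assumption about how the network computes it.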
Consciousness And The Inter Mind
Conscious Artificial Intelligence Using The Inter Mind Model. Human Consciousness Transfer Using The Inter Mind Model. Reality Is A Simulation Using The Inter Mind Model. If A Tree Falls In A Forest Using The Inter Mind Model. The Big Bang happens and a new Universe is created. This Universe consists of Matter, Energy, and Space. After billions of years of complicated interactions and processes the Matter, Energy, and Space produce a planet with Conscious Life Forms (CLFs). In the course of their evolution the CLFs will need to See each other in order to live and interact with each other. But what does it really mean to See? A CLF is first of all a Physical Thing. There is no magic power that just lets a CLF See another CLF. A CLF can only Detect another CLF through some sensing mechanism which must be made out of Physical material and which uses Physical processes. There is never any kind of Seeing in the sense that we think we understand it. There is always only ...